Modeling Word Learning and Processing with Recurrent Neural Networks

Authors

Abstract


Similar resources

Word Sense Disambiguation with Recurrent Neural Networks

This paper presents a neural network architecture for word sense disambiguation (WSD). The architecture employs recurrent layers, specifically LSTM cells, to capture information about word order and to easily incorporate distributed word representations (embeddings) as features, without having to use a fixed window of text. The paper demonstrates that the architecture is...
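The core mechanism the abstract refers to — an LSTM cell carrying a hidden and cell state across an arbitrarily long word sequence, rather than a fixed context window — can be illustrated with a toy, scalar-dimension sketch (all weights and inputs here are made-up values, not from the paper):

```python
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def lstm_step(x, h_prev, c_prev, W):
    # One LSTM cell step with scalar input and state (toy dimensions).
    # Gates: input (i), forget (f), output (o), candidate (g).
    i = sigmoid(W["wi"] * x + W["ui"] * h_prev + W["bi"])
    f = sigmoid(W["wf"] * x + W["uf"] * h_prev + W["bf"])
    o = sigmoid(W["wo"] * x + W["uo"] * h_prev + W["bo"])
    g = math.tanh(W["wg"] * x + W["ug"] * h_prev + W["bg"])
    c = f * c_prev + i * g          # new cell state
    h = o * math.tanh(c)            # new hidden state
    return h, c

# Run over a "sentence" of toy scalar embeddings; the final hidden
# state summarizes the whole sequence, with no fixed context window.
W = {k: 0.5 for k in ("wi", "ui", "bi", "wf", "uf", "bf",
                      "wo", "uo", "bo", "wg", "ug", "bg")}
h, c = 0.0, 0.0
for x in [0.1, -0.3, 0.7]:
    h, c = lstm_step(x, h, c, W)
print(round(h, 4))
```

In the paper's setting, `x` would be a word embedding vector and the gates matrix-valued; the final (or per-position) hidden state would feed a sense classifier.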

Learning with recurrent neural networks

This thesis examines so-called folding neural networks as a mechanism for machine learning. Folding networks form a generalization of partial recurrent neural networks such that they are able to deal with tree structured inputs instead of simple linear lists. In particular, they can handle classical formulas – they were proposed originally for this purpose. After a short explanation of the neur...

Word-Phrase-Entity Recurrent Neural Networks for Language Modeling

The recently introduced framework of Word-Phrase-Entity language modeling is applied to Recurrent Neural Networks and leads to similar improvements as reported for n-gram language models. In the proposed architecture, RNN LMs do not operate in terms of lexical items (words), but consume sequences of tokens that could be words, word phrases or classes such as named entities, with the optimal rep...
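The tokenization step this abstract describes — feeding the RNN LM tokens that may be words, multi-word phrases, or entity classes — can be sketched with a greedy longest-match pass over a word sequence (the phrase table and tag names below are invented for illustration, not the paper's inventory):

```python
# Toy phrase/entity table: word n-grams mapped to single tokens.
PHRASES = {
    ("new", "york"): "ENT:new_york",
    ("machine", "translation"): "PHR:machine_translation",
}

def to_wpe_tokens(words, phrases=PHRASES, max_len=2):
    """Greedy longest-match segmentation into word/phrase/entity tokens."""
    out, i = [], 0
    while i < len(words):
        for n in range(max_len, 1, -1):       # try longest n-grams first
            key = tuple(words[i:i + n])
            if key in phrases:
                out.append(phrases[key])
                i += n
                break
        else:
            out.append(words[i])              # fall back to the plain word
            i += 1
    return out

print(to_wpe_tokens("i live in new york".split()))
```

The resulting mixed token stream, rather than raw words, is what the RNN LM would consume; the paper's contribution lies in choosing the optimal segmentation, which this greedy sketch does not attempt.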

Sequence Processing with Recurrent Neural Networks

Sequence processing involves several tasks such as clustering, classification, prediction, and transduction of sequential data, which can be symbolic, non-symbolic, or mixed. Examples of symbolic data patterns occur in modelling natural (human) language, while the prediction of the water level of the River Thames is an example of processing non-symbolic data. If the content of a sequence will be varying ...

Minimum Translation Modeling with Recurrent Neural Networks

We introduce recurrent neural network-based Minimum Translation Unit (MTU) models, which make predictions based on an unbounded history of previous bilingual contexts. Traditional back-off n-gram models suffer under the sparse nature of MTUs, which makes estimation of high-order sequence models challenging. We tackle the sparsity problem by modeling MTUs both as bags-of-words and as a sequence of i...


Journal

Journal title: Information

Year: 2020

ISSN: 2078-2489

DOI: 10.3390/info11060320